Estimating value functions is a core component of reinforcement learning algorithms. Temporal difference (TD) learning algorithms use bootstrapping, i.e., they update the value function toward a learning target using value estimates at subsequent time steps. Alternatively, the value function can be updated toward a learning target constructed from separately predicted successor features (SFs) -- policy-dependent models -- combined with the instantaneous reward. We focus on the bootstrapping targets used when estimating value functions and propose a new backup target, the $\eta$-return mixture, which implicitly combines value-predictive knowledge (used by TD methods) with (successor) feature-predictive knowledge, with a parameter $\eta$ capturing how much to rely on each. We illustrate that incorporating predictive knowledge through an $\eta\gamma$-discounted SF model makes more efficient use of sampled experience, compared to either bootstrapping entirely on the value function estimate, or bootstrapping on the product of separately estimated successor features and an instantaneous reward model. We empirically show that this approach leads to faster policy evaluation and better control performance, for tabular and nonlinear function approximation, indicating scalability and generality.
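To make the mixed backup concrete, here is a minimal tabular policy-evaluation sketch, assuming one-hot state features and illustrative names (`V`, `psi`, `w`, `eta`); it is one interpretation of the described target, not the paper's implementation.

```python
import numpy as np

n_states, gamma, eta, alpha = 5, 0.99, 0.5, 0.1
V = np.zeros(n_states)    # value estimates (value-predictive knowledge)
psi = np.eye(n_states)    # successor-feature estimates (feature-predictive knowledge)
w = np.zeros(n_states)    # linear reward model: r(s) ~ w @ phi(s)

def eta_mixture_update(s, r, s_next):
    """One transition update toward an eta-mixture backup target (sketch)."""
    td_target = r + gamma * V[s_next]          # bootstrap on the value estimate
    sf_target = r + gamma * psi[s_next] @ w    # bootstrap through the SF + reward models
    target = eta * td_target + (1.0 - eta) * sf_target
    V[s] += alpha * (target - V[s])
    # learn the SF and reward models from the same sampled transition
    phi = np.eye(n_states)[s]
    psi[s] += alpha * (phi + gamma * psi[s_next] - psi[s])
    w += alpha * (r - w @ phi) * phi
```

With `eta = 1` this reduces to ordinary TD(0); with `eta = 0` it bootstraps purely on the product of the separately estimated successor features and reward model.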
We briefly review common assumptions about biological learning drawn from findings in experimental neuroscience and contrast them with the efficiency of gradient-based learning in recurrent neural networks. The key issues discussed in this review include: synaptic plasticity, neural circuits, the theory-experiment divide, and objective functions. We conclude with recommendations for theoretical and experimental neuroscientists that can help bring clarity to these issues when designing new studies.
When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
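As a rough illustration of the disentanglement idea (separate anatomy, contrast, and artifact representations recombined by a decoder), consider the following minimal sketch; the module shapes and the zeroing of the artifact code are expository assumptions, not HACA3's actual architecture.

```python
import torch
import torch.nn as nn

class HarmonizerSketch(nn.Module):
    """Toy contrast/anatomy/artifact disentanglement (illustrative only)."""
    def __init__(self, c=1, d=16):
        super().__init__()
        self.anatomy = nn.Conv2d(c, d, 3, padding=1)   # anatomy encoder
        self.contrast = nn.Conv2d(c, d, 3, padding=1)  # contrast encoder
        self.artifact = nn.Conv2d(c, d, 3, padding=1)  # artifact encoder
        self.decoder = nn.Conv2d(3 * d, c, 3, padding=1)

    def forward(self, src, tgt):
        # take anatomy from the source image, contrast from the target image,
        # and suppress the artifact code to synthesize a harmonized image
        a = self.anatomy(src)
        b = self.contrast(tgt)
        z = torch.zeros_like(a)
        return self.decoder(torch.cat([a, b, z], dim=1))
```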
When presented with a data stream of two statistically dependent variables, predicting the future of one of the variables (the target stream) can benefit from information about both its history and the history of the other variable (the source stream). For example, fluctuations in temperature at a weather station can be predicted using both temperatures and barometric readings. However, a challenge when modelling such data is that it is easy for a neural network to rely on the greatest joint correlations within the target stream, which may ignore a crucial but small information transfer from the source to the target stream. As well, there are often situations where the target stream may have previously been modelled independently and it would be useful to use that model to inform a new joint model. Here, we develop an information bottleneck approach for conditional learning on two dependent streams of data. Our method, which we call Transfer Entropy Bottleneck (TEB), allows one to learn a model that bottlenecks the directed information transferred from the source variable to the target variable, while quantifying this information transfer within the model. As such, TEB provides a useful new information bottleneck approach for modelling two statistically dependent streams of data in order to make predictions about one of them.
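The bottleneck idea can be sketched as a conditional variational objective: compress the source history into a stochastic code conditioned on the target history, and penalize the code's information content with a KL term. The encoder/decoder shapes and the Gaussian prior below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TEBSketch(nn.Module):
    """Minimal conditional information-bottleneck sketch (assumed, simplified)."""
    def __init__(self, dx, dy, dz):
        super().__init__()
        self.enc = nn.Linear(dx + dy, 2 * dz)  # q(z | x, y): mean and log-variance
        self.dec = nn.Linear(dy + dz, dy)      # p(y_next | y, z)

    def forward(self, x_hist, y_hist, y_next, beta=1e-3):
        mu, logvar = self.enc(torch.cat([x_hist, y_hist], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        pred = self.dec(torch.cat([y_hist, z], -1))
        recon = ((pred - y_next) ** 2).mean()
        # KL(q(z|x,y) || N(0, I)) upper-bounds the information carried by z,
        # i.e. it bottlenecks the source-to-target transfer
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return recon + beta * kl
```

The coefficient `beta` plays the usual bottleneck role: larger values squeeze more of the source-stream information out of the code.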
Reinforcement learning (RL) algorithms have achieved notable success in recent years, but still struggle with fundamental issues in long-term credit assignment. It remains difficult to learn in situations where success is contingent upon multiple critical steps that are distant in time from each other and from a sparse reward; as is often the case in real life. Moreover, how RL algorithms assign credit in these difficult situations is typically not coded in a way that can rapidly generalize to new situations. Here, we present an approach using offline contrastive learning, which we call contrastive introspection (ConSpec), that can be added to any existing RL algorithm and addresses both issues. In ConSpec, a contrastive loss is used during offline replay to identify invariances among successful episodes. This takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon than it is to prospectively predict reward at every step taken in the environment. ConSpec stores this knowledge in a collection of prototypes summarizing the intermediate states required for success. During training, arrival at any state that matches these prototypes generates an intrinsic reward that is added to any external rewards. As well, the reward shaping provided by ConSpec can be made to preserve the optimal policy of the underlying RL agent. The prototypes in ConSpec provide two key benefits for credit assignment: (1) They enable rapid identification of all the critical states. (2) They do so in a readily interpretable manner, enabling out of distribution generalization when sensory features are altered. In summary, ConSpec is a modular system that can be added to any existing RL algorithm to improve its long-term credit assignment.
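The prototype-matching intrinsic reward can be sketched as follows, assuming a learned state encoder and cosine-similarity matching against a small set of prototype vectors; the names and threshold are illustrative, not ConSpec's actual code.

```python
import torch
import torch.nn.functional as F

def intrinsic_reward(state_emb, prototypes, tau=0.9, scale=1.0):
    """Bonus when an encoded state matches any success prototype (sketch).

    state_emb: (d,) embedding of the current state
    prototypes: (k, d) prototype vectors summarizing critical states
    """
    sims = F.cosine_similarity(state_emb.unsqueeze(0), prototypes, dim=-1)
    best = sims.max()
    # fire an intrinsic bonus only on a close match, added to external reward
    return scale * best if best > tau else torch.tensor(0.0)
```

The prototypes themselves would be fit offline with a contrastive loss that pulls embeddings from successful episodes toward the prototypes and pushes embeddings from unsuccessful episodes away.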
Researchers produce thousands of scholarly documents containing valuable technical knowledge. The community faces the laborious task of reading these documents to identify, extract, and synthesize information. To automate information gathering, document-level question answering (QA) offers a flexible framework where human-posed questions can be adapted to extract diverse knowledge. Finetuning QA systems requires access to labeled data (tuples of context, question and answer). However, data curation for document QA is uniquely challenging because the context (i.e. answer evidence passage) needs to be retrieved from potentially long, ill-formatted documents. Existing QA datasets sidestep this challenge by providing short, well-defined contexts that are unrealistic in real-world applications. We present a three-stage document QA approach: (1) text extraction from PDF; (2) evidence retrieval from extracted texts to form well-posed contexts; (3) QA to extract knowledge from contexts to return high-quality answers -- extractive, abstractive, or Boolean. Using QASPER for evaluation, our detect-retrieve-comprehend (DRC) system achieves a +7.19 improvement in Answer-F1 over existing baselines while delivering superior context selection. Our results demonstrate that DRC holds tremendous promise as a flexible framework for practical scientific document QA.
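An end-to-end detect-retrieve-comprehend pipeline of this shape can be wired from common open-source components, as in the sketch below; the specific models and the paragraph-splitting heuristic are placeholders, not the DRC system's actual components.

```python
from pdfminer.high_level import extract_text                  # stage 1: extract
from sentence_transformers import SentenceTransformer, util   # stage 2: retrieve
from transformers import pipeline                             # stage 3: comprehend

def answer(pdf_path, question, k=3):
    # 1) text extraction from the PDF
    text = extract_text(pdf_path)
    passages = [p for p in text.split("\n\n") if len(p) > 50]
    # 2) evidence retrieval to form a well-posed context
    retriever = SentenceTransformer("all-MiniLM-L6-v2")
    scores = util.cos_sim(retriever.encode(question),
                          retriever.encode(passages))[0]
    top = scores.topk(min(k, len(passages))).indices
    context = " ".join(passages[i] for i in top)
    # 3) QA over the retrieved context
    qa = pipeline("question-answering")
    return qa(question=question, context=context)["answer"]
```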
There remains a significant gap between expectations and the successful adoption of AI to innovate and improve business. With the advent of deep learning, AI adoption has become more complex, as it often incorporates big data and the internet of things, affecting data privacy. Existing frameworks have identified the need to focus on human-centered design, combining technical and business/organizational perspectives. However, trust remains a critical issue that needs to be designed in from the very beginning. The proposed framework extends the human-centered design approach, emphasizing and maintaining trust throughout the process. This paper proposes a theoretical framework for responsible artificial intelligence (AI) implementation. The proposed framework emphasizes a synergistic business-technology approach through an agile co-creation process. The aim is to streamline the adoption of AI to innovate and improve business by involving all stakeholders throughout the project, so that AI technology is designed, developed, and deployed in collaboration with people rather than in isolation. The framework presents a fresh viewpoint on responsible AI implementation, based on an analytical literature review, conceptual framework design, and practitioners' mediating expertise. It emphasizes establishing and maintaining trust through human-centered design and agile development. This human-centered approach is aligned with, and enabled by, the privacy-by-design principle. The creators of the technology and its end users work together to tailor AI solutions to business requirements and human characteristics. An illustrative case study on adopting AI to assist hospital planning demonstrates that the proposed framework is applicable to real-life applications.
Common data models solve many of the challenges of standardizing electronic health record (EHR) data, but are unable to integrate the resources required for deep phenotyping. Open Biological and Biomedical Ontology (OBO) Foundry ontologies provide semantically computable representations of biological knowledge and enable the integration of heterogeneous biomedical data. However, mapping EHR data to OBO Foundry ontologies requires significant manual curation and domain expertise. We introduce OMOP2OBO, a framework for mapping Observational Medical Outcomes Partnership (OMOP) standard vocabularies to OBO Foundry ontologies. Using this framework, we produced mappings for 92,367 conditions, 8,615 drug ingredients, and 10,673 measurement results. Domain experts validated the mapping accuracy, and when examined across 24 hospitals, the mappings covered 99% of conditions and drug ingredients and 68% of measurements. Finally, we demonstrate that the OMOP2OBO mappings can help systematically identify undiagnosed rare disease patients who might benefit from genetic testing.
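In practice, consuming such mappings can be as simple as a lookup table keyed by OMOP concept ID; the file layout, column names, and example values below are hypothetical, for illustration only.

```python
import csv

def load_omop2obo(path):
    """Load a hypothetical two-column mapping file: omop_concept_id -> obo_curie."""
    with open(path, newline="") as f:
        return {row["omop_concept_id"]: row["obo_curie"]
                for row in csv.DictReader(f)}

# usage (illustrative values, not the released mappings' schema):
# mappings = load_omop2obo("omop2obo_conditions.csv")
# mappings.get("201826")  # -> e.g. a MONDO disease identifier
```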
Risk management in many environmental settings requires an understanding of the mechanisms that drive extreme events. A useful metric for quantifying such risk is the extreme quantile of a response variable, conditioned on predictor variables that describe climate, biosphere, and environmental states. Typically, these quantiles lie outside the range of observable data, and so their estimation requires specification of parametric extreme value models within a regression framework. Classical approaches in this context utilize linear or additive relationships between predictor and response variables and suffer in either their predictive capability or their computational efficiency; moreover, their simplicity is unlikely to capture the truly complex structures that lead to the creation of extreme wildfires. In this paper, we propose a new methodological framework for performing extreme quantile regression using artificial neural networks, which are able to capture complex non-linear relationships and scale well to high-dimensional data. The "black box" nature of neural networks means that they lack the desirable trait of interpretability often favored by practitioners; we therefore unify aspects of linear and additive models with deep learning to create partially interpretable neural networks that can be used for statistical inference while retaining high prediction accuracy. To complement this methodology, we further propose a novel point process model for extreme values that overcomes the finite lower-endpoint problem associated with the generalized extreme value class of distributions. The efficacy of our unified framework is illustrated on U.S. wildfire data with a high-dimensional predictor set, where we show substantial improvements in predictive performance over linear and spline-based regression techniques.
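The "partially interpretable" design can be sketched as an additive combination of a linear (interpretable) component and a deep correction, trained with a pinball loss at a high quantile level; all names, shapes, and the choice of quantile below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class PartiallyInterpretableNet(nn.Module):
    """Linear effects plus a black-box deep correction (sketch)."""
    def __init__(self, d_lin, d_deep, hidden=64):
        super().__init__()
        self.linear = nn.Linear(d_lin, 1)   # interpretable coefficients
        self.deep = nn.Sequential(          # flexible non-linear correction
            nn.Linear(d_deep, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_lin, x_deep):
        return self.linear(x_lin) + self.deep(x_deep)

def pinball_loss(pred, y, tau=0.95):
    """Quantile (pinball) loss targeting the tau-th conditional quantile."""
    diff = y - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1) * diff))
```

Inspecting `model.linear.weight` after training recovers the interpretable per-predictor effects, while the deep component absorbs the remaining non-linear structure.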